 

David R. Bull
University of Bristol
Membership Number: 41
Address: Department of Electrical and Electronic Engineering, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol BS8 1UB, United Kingdom.
Email: Dave.Bull@bristol.ac.uk
Phone: +44 (0) 117 9545195
Fax: +44 (0) 117 9545206
URL: http://www.een.bris.ac.uk/

Biographical Sketch
DAVID BULL is currently Professor of Digital Signal Processing and Head of Department. He leads the Image Communications Group at Bristol and is Deputy Director of the Centre for Communications Research. He has worked widely in the fields of 1-D, 2-D and 3-D signal processing. His current research is focused on the problems of image and video communications, including error-resilient source coding, linear and nonlinear filterbanks, scalable coding methods, motion estimation and architectural optimisation. His work in these areas is widely supported by both industry and the EPSRC, and he has published well over 200 papers. He is a member of the OST Foresight ITEC Committee, the EPSRC Communications College and the Steering Group for the DTI/EPSRC LINK programme in Broadcast Technology, and is a past director of the VCE in Digital Broadcasting and Multimedia Technology. He is currently Chairman and Technical Director of ProVision Communication Technologies Ltd.

NISHAN CANAGARAJAH is currently a Reader in Signal Processing at Bristol. Prior to this he was a Research Assistant (1993-94) at Bristol, and in 1994 he was appointed as a Lecturer. He holds a BA (Hons) and a PhD, the latter in DSP techniques for speech enhancement, both from the University of Cambridge. He was a committee member of the IEE Professional Group E5, a member of the VCE in Digital Broadcasting and Multimedia Technology and an associate editor of the IEE Electronics and Communication Journal. His research interests include image and video coding, content-based indexing and retrieval, 3-D video, and audio analysis and rendering. His work in these areas is widely supported by industry, the EU and the EPSRC, and he has published well over 150 papers.

STAVRI NIKOLOV is currently a Research Fellow in Image Communications at Bristol. His research there is focused on medical image fusion, new methods for volume visualisation and navigation, and, more recently, the use of eye-tracking and VR in 2-D and 3-D image analysis. In the last 10 years he has taken part in several international and national image analysis projects in Austria, Portugal and the UK, covering microscopic image processing, analytical data processing, construction of 3-D maps of the seafloor, analysis of volumetric sonar images, image fusion and medical imaging. He has published more than 25 articles in peer-reviewed journals and conferences and two invited book chapters in these areas. He is a member of the British Machine Vision Association, the International Society of Information Fusion and ACM SIGGRAPH, and an associate member of the IEEE.

University of Bristol (Image Communications Group, Department of Electronic & Electrical Engineering)
The Image Communications Group (ICG) is part of the Centre for Communications Research (CCR) at the University of Bristol. Seven research assistants and PhD students are currently working in the ICG on projects in the research areas described below:

EYE-TRACKING, GAZE-CONTINGENT DISPLAYS AND VISUAL PERCEPTION The group has state-of-the-art eye-tracking systems for conducting studies on visual perception. Recently, interesting results have been achieved in sign language perception studies with deaf users; these findings are being exploited to develop an optimised codec for sign language delivery over mobile networks. Another active area of research is the construction of gaze-contingent displays (GCDs) in general, and gaze-contingent multi-resolution displays (GCMRDs) and gaze-contingent multi-modality displays (GCMMDs) in particular. Our research efforts are also focused on the development of foveated compression schemes that will enable real-time video compression and transmission in different applications (e.g. teleconferencing or immersive environments) and on various displays (e.g. hemispheric, stereo). The group has also been investigating how different people (both experts and novices) view and scan various kinds of 2-D and 3-D images, by recording their scanpaths and analysing them together with the image content.
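Foveated schemes such as those above exploit the fall-off of visual acuity away from the point of gaze. As a minimal illustration only (not the group's codec), the sketch below blends a sharp image with a blurred copy according to distance from a gaze point; the box blur, Gaussian weighting and all parameter values are assumptions made for the example.

```python
import numpy as np

def box_blur(img, k=9):
    """Crude k-by-k box blur via shifted sums (illustrative stand-in
    for a proper low-pass filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def foveate(img, gaze, sigma=10.0, k=9):
    """Keep full resolution near the gaze point, fade to the blurred
    copy in the periphery using a Gaussian weight (an assumption)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy - gaze[0]) ** 2 + (xx - gaze[1]) ** 2
    weight = np.exp(-d2 / (2 * sigma ** 2))  # 1 at gaze, -> 0 far away
    return weight * img + (1 - weight) * box_blur(img, k)

# example: foveate a random test image around its centre
rng = np.random.default_rng(0)
img = rng.random((128, 128))
out = foveate(img, gaze=(64, 64), sigma=10.0, k=9)
```

In a real foveated codec the peripheral regions would be coded at reduced rate rather than merely blurred; the blend here only visualises the acuity model.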

ANALYSIS, VISUALISATION AND PERCEPTION OF SPATIO-TEMPORAL INFORMATION IN VIDEO AND IMAGE SEQUENCES A new system for three-dimensional visualisation and interactive exploration of spatio-temporal information in image sequences has recently been developed by the ICG. Spatio-temporal volumes are constructed from video and image sequences and studied by generating cuts through, and projections of, the volumes. Direct real-time volume rendering allows spatio-temporal patterns and structures to be viewed and explored interactively, which can significantly aid the perception and understanding of spatio-temporal events and facilitates the construction of mental kinetic models. The same system is also used to study four-dimensional (4-D) multi-view spatio-temporal data from multiple video cameras simultaneously viewing the same object or scene.
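The volume construction and cutting described above can be sketched in a few lines. The example below (a toy illustration, not the ICG system) stacks frames into a (T, H, W) volume and extracts an x-t slice, whose bright trace reveals horizontal motion over time; the frame size, dot motion and slice row are assumptions.

```python
import numpy as np

def build_st_volume(frames):
    """Stack a list of equal-sized frames into a (T, H, W)
    spatio-temporal volume."""
    return np.stack(frames, axis=0)

def xt_slice(volume, row):
    """Cut the volume at a fixed image row: the resulting x-t slice
    shows horizontal motion at that row as a trace through time."""
    return volume[:, row, :]

# synthetic sequence: a bright dot moving 2 px right per frame along row 8
frames = []
for t in range(10):
    f = np.zeros((16, 32))
    f[8, 2 * t] = 1.0
    frames.append(f)

vol = build_st_volume(frames)
trace = xt_slice(vol, 8)
cols = trace.argmax(axis=1)  # dot's column at each time step
```

A constant-velocity object appears as a straight line in the x-t slice, which is exactly the kind of spatio-temporal structure such cuts make visible.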

3-D CAPTURE AND VIEW SYNTHESIS The demand for dramatic special effects in film and broadcast production has increased the need for hybrid synthetic and natural content, object-based decomposition methods and virtual view synthesis. These, combined with motion effects based on high-frame-rate and/or multi-camera capture, are beginning to transform film and television content creation. The group has significant expertise in image and video interpolation and in the recovery of depth maps from two or more views. Applications have included efficient and disparity-accurate view synthesis methods for multi-view autostereoscopic displays, 3-D sprite generation, and improved and scalable compression methods for coding multi-view content. High-speed search methods for dense disparity estimation, based on genetic algorithms and wavelet pyramids, have recently been proposed.
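Dense disparity estimation can be sketched as a brute-force sum-of-absolute-differences (SAD) block search over a rectified stereo pair. This baseline is for illustration only and is not the genetic-algorithm/wavelet-pyramid method mentioned above; the block size, search range and synthetic 3-pixel shift are all assumptions.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """Per-pixel integer disparity d minimising SAD, assuming rectified
    views where right[y, x] corresponds to left[y, x + d]."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = right[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp + 1):
                if x + d + half >= w:
                    break  # candidate window would leave the image
                cand = left[y - half:y + half + 1,
                            x + d - half:x + d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# synthetic pair: right view is the left view shifted 3 px
rng = np.random.default_rng(1)
left = rng.random((20, 30))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = block_match_disparity(left, right)
```

The exhaustive inner search is what fast methods (hierarchical pyramids, genetic search) aim to avoid; on real images a smoothness prior would also be needed, since SAD alone is ambiguous in textureless regions.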

IMAGE FUSION A number of new algorithms for the fusion of 2-D and 3-D images have been developed by the ICG in the last 5 years. These include novel techniques for image fusion in the wavelet domain, based on multi-scale edge representations or on the dual-tree complex wavelet transform (DT-CWT). Such techniques provide detailed control over the type and amount of information to be fused; in our evaluations, the DT-CWT-based scheme outperformed the other fusion schemes tested. Volumetric medical images were fused for the first time by combining their 3-D wavelet transforms, and new fusion rules have been proposed. New techniques for integrated visualisation of multi-modality volumetric images have also been designed, and more recent studies investigate optimal parameters for gaze-contingent multi-modality displays (both 2-D and 3-D).
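The wavelet-domain fusion principle can be illustrated with a single-level 2-D Haar transform and a maximum-magnitude selection rule for the detail bands. This is a common textbook rule chosen for the sketch, not the group's method; the DT-CWT and multi-scale edge representations are not reproduced here.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform (even-sized input):
    returns approximation (ll) and three detail bands (lh, hl, hh)."""
    a = (img[0::2] + img[1::2]) / 2   # row averages
    d = (img[0::2] - img[1::2]) / 2   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Average the approximations; keep the larger-magnitude detail
    coefficient from either image (max-magnitude fusion rule)."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

Selecting detail coefficients by magnitude tends to carry the salient edges of each source image into the fused result, which is why wavelet-domain rules give such fine-grained control over what is fused.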

CONTENT-BASED ANALYSIS AND RETRIEVAL The group has recently developed expertise in automated content analysis for the storage and retrieval of image sequences from databases and archives. In particular, a number of schemes have been developed for video parsing and classification of both uncompressed and compressed video sequences. Recent innovations include the use of complex wavelets for texture analysis and segmentation, and a number of feature extraction schemes that are invariant to rotation, scale and illumination. We are currently developing models for extracting semantic information from low-level image features.


Site generated on Friday, 06 January 2006